---
title: Retrieval
keywords: fastai
sidebar: home_sidebar
summary: "Module containing image retrieval by global GeM descriptor similarity. Includes code borrowed from https://github.com/filipradenovic/cnnimageretrieval-pytorch"
description: "Module containing image retrieval by global GeM descriptor similarity. Includes code borrowed from https://github.com/filipradenovic/cnnimageretrieval-pytorch"
nb_path: "retrieval.ipynb"
---
{% raw %}
{% endraw %} {% raw %}
from local_feature_tutorial.datasets import *
{% endraw %} {% raw %}

class Flatten[source]

Flatten() :: Module

Flattens the input tensor to shape `(batch, -1)`, turning a pooled feature map into a descriptor vector.

{% endraw %} {% raw %}
{% endraw %} {% raw %}

gem[source]

gem(x, p=3, eps=1e-06)

Generalized Mean (GeM) pooling: clamps `x` to at least `eps`, raises it to the power `p`, average-pools over the spatial dimensions, and takes the `1/p` root.
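A minimal sketch of this pooling operation (the `gem_sketch` name is ours, for illustration):

```python
import torch
import torch.nn.functional as F

def gem_sketch(x, p=3, eps=1e-6):
    # clamp to eps, raise to p, average over H x W, then take the 1/p root
    return F.avg_pool2d(x.clamp(min=eps).pow(p),
                        (x.size(-2), x.size(-1))).pow(1.0 / p)

feats = torch.rand(1, 4, 8, 8)   # a fake (batch, C, H, W) feature map
desc = gem_sketch(feats)         # shape (1, 4, 1, 1)
```

With `p=1` this reduces to average pooling; as `p` grows it approaches max pooling, which is why a learnable `p` is useful.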

{% endraw %} {% raw %}

l2n[source]

l2n(x, eps=1e-06)

L2-normalizes `x` along dim 1, bringing each descriptor to unit length; `eps` guards against division by zero.
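A sketch of what such a normalization does (the `l2n_sketch` name is ours, not the library function):

```python
import torch

def l2n_sketch(x, eps=1e-6):
    # divide each row by its L2 norm; eps avoids division by zero
    return x / (x.norm(p=2, dim=1, keepdim=True) + eps)

v = torch.tensor([[3.0, 4.0]])
u = l2n_sketch(v)   # approximately [[0.6, 0.8]]
```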

{% endraw %} {% raw %}

powerlaw[source]

powerlaw(x, eps=1e-06)

Power-law (signed square-root) normalization of the descriptor entries, which dampens bursty activations.
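Power-law normalization is commonly implemented as the signed square root; a sketch under that assumption (the borrowed code may differ in details):

```python
import torch

def powerlaw_sketch(x, eps=1e-6):
    # signed square root: sign(x) * sqrt(|x| + eps) dampens large activations
    return x.sign() * (x.abs() + eps).sqrt()

t = torch.tensor([4.0, -9.0, 0.0])
out = powerlaw_sketch(t)   # approximately [2.0, -3.0, 0.0]
```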

{% endraw %} {% raw %}

class L2N[source]

L2N(eps=1e-06) :: Module

Module wrapper around `l2n`: L2-normalizes its input along dim 1.

{% endraw %} {% raw %}

class PowerLaw[source]

PowerLaw(eps=1e-06) :: Module

Module wrapper around `powerlaw`.

{% endraw %} {% raw %}

class GeM[source]

GeM(p=3, eps=1e-06) :: Module

Generalized Mean pooling layer. The exponent `p` is stored as a learnable parameter, so the pooling behavior can be trained end-to-end together with the network.
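A sketch of such a layer with a learnable exponent, mirroring the functional `gem` above (a sketch of the standard GeM formulation, not the exact library code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeMSketch(nn.Module):
    def __init__(self, p=3, eps=1e-6):
        super().__init__()
        # p is an nn.Parameter, so the pooling exponent is trained with the network
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps

    def forward(self, x):
        return F.avg_pool2d(x.clamp(min=self.eps).pow(self.p),
                            (x.size(-2), x.size(-1))).pow(1.0 / self.p)

pool = GeMSketch()
out = pool(torch.rand(2, 16, 7, 7))   # shape (2, 16, 1, 1)
```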

{% endraw %} {% raw %}

class ImageRetrievalNet[source]

ImageRetrievalNet(features, lwhiten, pool, whiten, meta) :: Module

The full global-descriptor network: a CNN `features` backbone, optional local whitening `lwhiten`, a pooling layer `pool` (e.g. `GeM`), optional descriptor whitening `whiten`, and a `meta` dict describing the configuration. Produces an L2-normalized global descriptor per image.
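The forward pass chains those stages; a simplified, hedged sketch of the flow (omitting local whitening and the `meta` bookkeeping; the class and stage names here are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RetrievalNetSketch(nn.Module):
    def __init__(self, features, pool, whiten=None):
        super().__init__()
        self.features, self.pool, self.whiten = features, pool, whiten

    def forward(self, x):
        o = self.pool(self.features(x)).flatten(1)   # (batch, dim)
        o = F.normalize(o, dim=1)                    # unit-length descriptor
        if self.whiten is not None:                  # optional learned whitening
            o = F.normalize(self.whiten(o), dim=1)
        return o

net = RetrievalNetSketch(nn.Conv2d(3, 8, 3), nn.AdaptiveAvgPool2d(1))
desc = net(torch.randn(2, 3, 32, 32))   # shape (2, 8), rows have unit norm
```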

{% endraw %} {% raw %}

init_network[source]

init_network(params)

Builds an `ImageRetrievalNet` from a configuration dictionary `params` (architecture, pooling, and whitening options).

{% endraw %} {% raw %}
{% endraw %} {% raw %}

get_pretrained_retrieval_network[source]

get_pretrained_retrieval_network(arch='')

Initializes and downloads the pretrained GeM global descriptor network

{% endraw %} {% raw %}
{% endraw %} {% raw %}
arch = 'resnet50'
gemnet = get_pretrained_retrieval_network(arch)
{% endraw %} {% raw %}

class ImageRanker[source]

ImageRanker(model, transforms=Compose([ToTensor(), Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]))

{% endraw %} {% raw %}
{% endraw %} {% raw %}
from local_feature_tutorial.datasets import *
from torchvision import transforms

oxfordimages = get_all_images_in_subdirs('data/oxford5k')

oxford_features = f'data/oxford_{arch}.pth'

IR = ImageRanker(gemnet, transforms=transforms.Compose(
            [transforms.ToPILImage(),
             transforms.Resize(256),
             transforms.ToTensor(),
             transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                  std=[0.229, 0.224, 0.225])]))
IR.process_db_images(oxfordimages, '.',
                     do_diffusion=False, on_gpu=True, db_save_fname=oxford_features)
Loading features from disk
Done
{% endraw %}

Let's search for similar images using the global descriptor.

{% raw %}
import cv2
import matplotlib.pyplot as plt
from local_feature_tutorial.visualization import *

query_fname = oxfordimages[3073]
img = cv2.cvtColor(cv2.imread(query_fname), cv2.COLOR_BGR2RGB)
top_k = IR.get_similar(img, 12)
visualize_grid([x.replace('../','') for x in top_k['paths']])
plt.show(block=True)
{% endraw %}
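Under the hood, ranking with a global descriptor is a nearest-neighbour search in descriptor space. A minimal sketch of that idea (function and variable names are ours; `get_similar` itself may additionally perform query-side extraction and optional diffusion):

{% raw %}
```python
import torch
import torch.nn.functional as F

def rank_by_descriptor(query_desc, db_descs, k=5):
    # with L2-normalized descriptors, a dot product equals cosine similarity
    scores = db_descs @ query_desc
    return torch.topk(scores, k).indices

db = F.normalize(torch.randn(100, 128), dim=1)   # fake database descriptors
query = db[7]                                    # query one of the db images
top = rank_by_descriptor(query, db)              # index 7 ranks first
```
{% endraw %}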

As we can see, the results are quite good, but some wrong images rank ahead of correct ones. Let's try to fix that with local features.

{% raw %}
from local_feature_tutorial.wbs import *
{% endraw %} {% raw %}

ImageRanker.spatially_verify[source]

ImageRanker.spatially_verify(img_fname, results_dict, two_view_matcher, vis=False)

{% endraw %} {% raw %}
{% endraw %}

Now we will match the images in the shortlist and re-order them by the number of verified inliers.
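The re-ordering step itself is simple; a sketch (the `rerank_by_inliers` helper is hypothetical, for illustration):

{% raw %}
```python
def rerank_by_inliers(paths, num_inliers):
    # sort the shortlist by verified inlier count, best match first
    order = sorted(range(len(paths)), key=lambda i: num_inliers[i], reverse=True)
    return [paths[i] for i in order]

reranked = rerank_by_inliers(['a.jpg', 'b.jpg', 'c.jpg'], [13, 1756, 66])
# ['b.jpg', 'c.jpg', 'a.jpg']
```
{% endraw %}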

{% raw %}
wbs = TwoViewMatcher(detector=cv2.SIFT_create(4000, contrastThreshold=-10000,
                                              edgeThreshold=-10000),
                     descriptor=HardNetDesc(),
                     matcher=SNNMMatcher(0.9),
                     geom_verif=degensac_Verifier())

topk_verif = IR.spatially_verify(query_fname, top_k, wbs, False)
100.00% [12/12 00:24<00:00]
{% endraw %} {% raw %}
print (topk_verif['num_inl'])
visualize_grid(topk_verif['paths'])
[1756, 173, 143, 140, 66, 29, 26, 25, 22, 21, 19, 13]
{% endraw %}

Now the same, but with match visualization.

{% raw %}
topk_verif = IR.spatially_verify(query_fname, top_k, wbs, True)
100.00% [12/12 00:24<00:00]
{% endraw %}